Efficient algorithms for learning functions with bounded variation
Author

Abstract
We show that the class FBV of [0, 1]-valued functions with total variation at most 1 can be agnostically learned with respect to the absolute loss in polynomial time from O((1/ε²) log(1/δ)) examples, matching a known lower bound to within a constant factor. We establish a bound of O(1/m) on the expected error of a polynomial-time algorithm for learning FBV in the prediction model, also matching a known lower bound to within a constant factor. Applying a known algorithm transformation to our prediction algorithm, we obtain a polynomial-time PAC learning algorithm for FBV with a sample complexity bound of O((1/ε) log(1/δ)); this also matches a known lower bound to within a constant factor. © 2003 Elsevier Inc. All rights reserved.
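The central quantity behind the class FBV is the total variation of a function on [0, 1]. For a function known only through its values at increasing sample points, the natural discrete analogue is the sum of absolute successive differences. The sketch below is illustrative only (the helper `total_variation` is not from the paper); it checks membership of two simple functions in FBV on a uniform grid:

```python
def total_variation(values):
    """Discrete total variation of a function given its values at
    increasing sample points: sum of absolute successive differences."""
    return sum(abs(b - a) for a, b in zip(values, values[1:]))

# f(x) = x and g(x) = |x - 0.5| on [0, 1] both have total variation 1
# (g falls by 0.5 and then rises by 0.5), so both lie in FBV.
xs = [i / 100 for i in range(101)]
print(total_variation([x for x in xs]))             # ~1.0
print(total_variation([abs(x - 0.5) for x in xs]))  # ~1.0
```

For monotone functions the sum telescopes to |f(1) − f(0)|, which is why the identity function attains total variation exactly 1.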
Similar articles
On the generalization of Trapezoid Inequality for functions of two variables with bounded variation and applications
In this paper, a generalization of trapezoid inequality for functions of two independent variables with bounded variation and some applications are given.
p-Lambda-bounded variation
A characterization of continuity of the $p$-$\Lambda$-variation function is given, and Helly's selection principle for $\Lambda BV^{(p)}$ functions is established. A characterization of the inclusion of Waterman-Shiba classes into classes of functions with a given integral modulus of continuity is given. A useful estimate on the modulus of variation of functions of class $\Lambda BV^{(p)}$ is found.
A companion of Ostrowski's inequality for functions of bounded variation and applications
A companion of Ostrowski's inequality for functions of bounded variation and applications are given.
Some Perturbed Inequalities of Ostrowski Type for Functions whose n-th Derivatives Are Bounded
We first establish an identity for $n$-times-differentiable mappings. Then a new inequality for $n$-times-differentiable functions is deduced. Finally, some perturbed Ostrowski-type inequalities for functions whose $n$th derivatives are of bounded variation are obtained.
An Improved Particle Swarm Optimizer Based on a Novel Class of Fast and Efficient Learning Factors Strategies
The particle swarm optimizer (PSO) is a population-based metaheuristic optimization method that can be applied to a wide range of problems, but it has drawbacks: it easily falls into local optima and suffers from slow convergence in the later stages. To address these problems, improved PSO (IPSO) variants have been proposed. To bring about a balance between the exploration and ex...
Journal:
Inf. Comput.
Volume 188, Issue
Pages -
Publication date: 2004